
    Cure of chronic viral infection and virus-induced type 1 diabetes by neutralizing antibodies

    The use of neutralizing antibodies is one of the most successful methods of interfering with receptor–ligand interactions in vivo. In particular, blockade of soluble inflammatory mediators or their corresponding cellular receptors has proven an effective way to regulate inflammation and/or prevent its negative consequences. However, one problem that comes along with effective neutralization of inflammatory mediators is the general systemic immunomodulatory effect. It is, therefore, important to design a treatment regimen that strikes at the right place and at the right time in order to achieve maximal effects with minimal duration of immunosuppression or hyperactivation. In this review, we reflect on two examples of how short-term administration of such neutralizing antibodies can block two distinct inflammatory consequences of viral infection. First, we review recent findings that blockade of the IL-10/IL-10R interaction can resolve chronic viral infection, and second, we reflect on how neutralization of the chemokine CXCL10 can abrogate virus-induced type 1 diabetes.

    Generating and auto-tuning parallel stencil codes

    In this thesis, we present a software framework, Patus, which generates high-performance stencil codes for different types of hardware platforms, including current multicore CPU and graphics processing unit architectures. The ultimate goals of the framework are productivity, portability (of both the code and performance), and achieving high performance on the target platform. A stencil computation updates every grid point in a structured grid based on the values of its neighboring points. This class of computations occurs frequently in scientific and general-purpose computing (e.g., in partial differential equation solvers or in image processing), justifying the focus on this kind of computation. The proposed key ingredients to achieve the goals of productivity, portability, and performance are domain-specific languages (DSLs) and the auto-tuning methodology. The Patus stencil specification DSL allows the programmer to express a stencil computation in a concise way, independently of hardware architecture-specific details. Thus, it increases programmer productivity by relieving them of low-level programming-model issues and of manually applying hardware platform-specific code optimization techniques. The use of domain-specific languages also implies code reusability: once implemented, the same stencil specification can be reused on different hardware platforms, i.e., the specification code is portable across hardware architectures. Constructing the language to be geared towards a special purpose makes it amenable to more aggressive optimizations and therefore to potentially higher performance. Auto-tuning provides performance and performance portability by automated adaptation of implementation-specific parameters to the characteristics of the hardware on which the code will run.
By automating the process of parameter tuning (which essentially amounts to solving an integer programming problem whose objective function is the code's performance as a function of the parameter configuration), the system can also be used more productively than if the programmer had to fine-tune the code manually. We show performance results for a variety of stencils for which Patus was used to generate the corresponding implementations. The selection includes stencils taken from two real-world applications: a simulation of the temperature within the human body during hyperthermia cancer treatment and a seismic application. These examples demonstrate the framework's flexibility and its ability to produce high-performance code.
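As an illustration of the two key ingredients named above, the following Python sketch pairs a 5-point stencil sweep with a brute-force auto-tuner that times cache-blocked variants and keeps the fastest block size. This is not Patus code; the stencil, the block-size candidates, and the timing loop are invented for illustration.

```python
import itertools
import time

import numpy as np

def jacobi_step(u, out):
    """One 5-point Jacobi stencil sweep: every interior grid point is
    replaced by the average of its four neighbors."""
    out[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                              u[1:-1, :-2] + u[1:-1, 2:])

def blocked_sweep(u, out, bi, bj):
    """The same sweep, traversed in (bi x bj) cache blocks; the block
    sizes are the implementation-specific parameters to be tuned."""
    n, m = u.shape
    for i0 in range(1, n - 1, bi):
        for j0 in range(1, m - 1, bj):
            i1, j1 = min(i0 + bi, n - 1), min(j0 + bj, m - 1)
            out[i0:i1, j0:j1] = 0.25 * (
                u[i0-1:i1-1, j0:j1] + u[i0+1:i1+1, j0:j1] +
                u[i0:i1, j0-1:j1-1] + u[i0:i1, j0+1:j1+1])

def autotune(u, candidates):
    """Exhaustive search over parameter configurations: the objective
    is the measured runtime, the fastest configuration wins."""
    out = np.empty_like(u)
    best, best_t = None, float("inf")
    for bi, bj in candidates:
        t0 = time.perf_counter()
        blocked_sweep(u, out, bi, bj)
        t = time.perf_counter() - t0
        if t < best_t:
            best, best_t = (bi, bj), t
    return best

u = np.random.rand(258, 258)
cfg = autotune(u, itertools.product([16, 32, 64], repeat=2))
```

In a real auto-tuner the search space is far larger and the measurement is repeated to reduce noise, but the structure (generate variants, measure, select) is the same.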

    Automatic code generation and tuning for stencil kernels on modern shared memory architectures

    In this paper, we present Patus, a code generation and auto-tuning framework for stencil computations targeted at multi- and manycore processors, such as multicore CPUs and graphics processing units. Patus, which stands for "Parallel Autotuned Stencils," generates a compute kernel from a specification of the stencil operation and a strategy which describes the parallelization and optimization to be applied, and leverages the auto-tuning methodology to optimize strategy-specific parameters for the given hardware architecture.

    Gene Transfer Agent Promotes Evolvability within the Fittest Subpopulation of a Bacterial Pathogen

    The Bartonella gene transfer agent (BaGTA) is an archetypical example of the domestication of a phage-derived element to permit high-frequency genetic exchange in bacterial populations. Here we used multiplexed transposon sequencing (TnSeq) and single-cell reporters to globally define the core components and transfer dynamics of BaGTA. Our systems-level analysis has identified inner- and outer-circle components of the BaGTA system, including 55 regulatory components, as well as 74 and 107 additional components mediating donor transfer and recipient uptake functions, respectively. We show that the stringent response signal guanosine tetraphosphate (ppGpp) restricts BaGTA induction to a subset of fast-growing cells, whereas BaGTA particle uptake depends on a functional Tol-Pal trans-envelope complex that mediates outer-membrane invagination upon cell division. Our findings suggest that Bartonella evolved an efficient strategy to promote genetic exchange within the fittest subpopulation while disfavoring exchange of deleterious genetic information, thereby facilitating genome integrity and rapid host adaptation.

    Resolving Legalese: A Multilingual Exploration of Negation Scope Resolution in Legal Documents

    Resolving the scope of a negation within a sentence is a challenging NLP task. The complexity of legal texts and the lack of annotated in-domain negation corpora pose challenges for state-of-the-art (SotA) models when performing negation scope resolution on multilingual legal data. Our experiments demonstrate that models pre-trained without legal data underperform on negation scope resolution: language models fine-tuned exclusively on domains such as literary texts and medical data yield inferior results compared to the outcomes documented in prior cross-domain experiments. We release a new set of annotated court decisions in German, French, and Italian and use it to improve negation scope resolution in both zero-shot and multilingual settings. We achieve token-level F1-scores of up to 86.7% in our zero-shot cross-lingual experiments, where the models are trained on two languages of our legal datasets and evaluated on the third. Our multilingual experiments, where the models were trained on all available negation data and evaluated on our legal datasets, resulted in F1-scores of up to 91.1%.
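The token-level F1-score used to report these results can be sketched in a few lines; here gold and predicted scopes are binary per-token labels (1 = token inside a negation scope). The example labels are a hypothetical illustration, not taken from the released corpora.

```python
def token_f1(gold, pred):
    """Token-level F1 for negation scope resolution: counts true/false
    positives and false negatives over per-token binary scope labels."""
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. "Der Vertrag ist nicht [mehr gueltig]" with one scope token missed:
score = token_f1([0, 1, 1, 1], [0, 1, 1, 0])  # precision 1.0, recall 2/3
```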

    Blockade but not overexpression of the junctional adhesion molecule C influences virus-induced type 1 diabetes in mice

    Type 1 diabetes (T1D) results from the autoimmune destruction of insulin-producing beta-cells in the pancreas. Recruitment of inflammatory cells is a prerequisite to beta-cell injury. The junctional adhesion molecule (JAM) family proteins JAM-B and JAM-C are involved in polarized leukocyte transendothelial migration and are expressed by vascular endothelial cells of peripheral tissue and high endothelial venules in lymphoid organs. Blocking of JAM-C efficiently attenuated cerulein-induced pancreatitis, rheumatoid arthritis, or inflammation induced by ischemia and reperfusion in mice. In order to investigate the influence of JAM-C on trafficking and transmigration of antigen-specific, autoaggressive T-cells, we used transgenic mice that express a protein of the lymphocytic choriomeningitis virus (LCMV) as a target autoantigen in the β-cells of the islets of Langerhans under the rat insulin promoter (RIP). Such RIP-LCMV mice turn diabetic after infection with LCMV. We found that upon LCMV infection JAM-C protein was upregulated around the islets in RIP-LCMV mice. JAM-C expression correlated with islet infiltration and functional beta-cell impairment. Blockade with a neutralizing anti-JAM-C antibody reduced the T1D incidence. However, JAM-C overexpression on endothelial cells did not accelerate diabetes in the RIP-LCMV model. In summary, our data suggest that JAM-C might be involved in the final steps of trafficking and transmigration of antigen-specific autoaggressive T-cells to the islets of Langerhans.

    Adapting established quality assurance methods to the programming of low-code and no-code applications: focus on source code analysis, performance analysis, and testing with the model-based low-code and no-code framework Posity

    Low-code and no-code (LCNC) is a new, fast-growing approach to developing software applications. The LCNC approach closes the gap between IT and business and enables non-technical domain experts to participate actively in the development process and to build complete applications. The development environments hide complex processes from programmers by offering prefabricated software building blocks in a modular, construction-kit format. LCNC introduces new concepts and properties. However, little scientific work has so far examined the challenges and methods of quality assurance in the programming of LCNC applications. This research gap motivates this thesis to examine selected quality assurance methods from traditional software development (performance analysis, source code analysis, and testing) for their suitability for quality assurance in LCNC. The thesis first analyzes the research knowledge base through a literature review and identifies the problems and requirements of the application domain by conducting a focus group with representatives from practice and research. Based on the state of research and the identified requirements, one software artifact each is developed for performance analysis and source code analysis, and two conceptual artifacts for different test procedures are created for testing, in the context of the commercial LCNC development platform Posity as a representative of LCNC platforms. The results section evaluates the created artifacts against the success criteria determined in the analysis, assesses the suitability of the examined methods for LCNC programming, and formulates recommendations for practice and research.
The validation of all three examined quality assurance methods (performance analysis, source code analysis, and testing) showed that they can be used for LCNC programming and that quality assurance measures can be derived from the analysis and test results. The generality of the results is limited in that they were validated in the context of only one platform, standing in for all LCNC platforms. Nevertheless, the findings allow the following generalizable conclusion: in order to run an analysis on an application built in LCNC code, it is essential that platform vendors provide the corresponding tools and that the result of an analysis sits at the same level of abstraction as the "code" itself. Only then can developers create analyses and tests, interpret the results, and take appropriate measures without needing deeper programming knowledge. The diversity of platforms and the lack of standards mean that every vendor must build its own solution for integrating these techniques. Quantitative performance metrics, such as execution time, could be captured in various parts of an LCNC application. Identifying performance problems requires a suitable visualization; the call-tree representation known from software development was judged highly suitable for this purpose. A starting point for further research would be to investigate methods or procedures that suggest solutions for performance problems, or could even fix them automatically. For source code analysis in particular, the suitability of the method depends substantially on the available rules and metrics. In this thesis, selected rules from traditional programming were examined and implemented.
A mature and generally accepted set of rules does not yet exist and is the subject of further research. Testing can uncover completely unexpected errors that only become apparent when the programs are executed, which makes testing an important quality assurance method. The thesis analyzes specification-based and diversifying test procedures as two widespread testing techniques. Both test procedures were validated as equally suitable for use in LCNC programming. The results of this research can serve as a basis for implementing these quality assurance methods on other vendors' platforms and as a guide for investigating and developing further methods in research.

    Parallel scalable PDE-constrained optimization: antenna identification in hyperthermia cancer treatment planning

    We present a PDE-constrained optimization algorithm which is designed for parallel scalability on distributed-memory architectures with thousands of cores. The method is based on a line-search interior-point algorithm for large-scale continuous optimization; it is matrix-free in that it does not require the factorization of derivative matrices. Instead, it uses a new parallel and robust iterative linear solver on distributed-memory architectures. We show almost linear parallel scalability results for the complete optimization problem, which is a newly emerging and important biomedical application related to antenna identification in hyperthermia cancer treatment planning.
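The "matrix-free" idea can be sketched with SciPy's LinearOperator: the Krylov solver only ever sees a matrix-vector product, so the operator is never assembled or factorized. The 1-D Laplacian below is an illustrative stand-in, not the hyperthermia operator from the paper, and the sketch is serial rather than distributed-memory.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 64

def apply_laplacian(v):
    """Matrix-vector product with the tridiagonal [-1, 2, -1] operator,
    computed on the fly; no matrix is ever stored or factorized."""
    out = 2.0 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

# The iterative solver (conjugate gradients here) needs only the matvec.
A = LinearOperator((n, n), matvec=apply_laplacian)
b = np.ones(n)
x, info = cg(A, b)  # info == 0 signals convergence
```

A distributed implementation would apply the same pattern, with the matvec exchanging halo values between ranks instead of slicing one local array.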

    FYCO1 Frameshift Deletion in Wirehaired Pointing Griffon Dogs with Juvenile Cataract

    Different breed-specific inherited cataracts have been described in dogs. In this study, we investigated an inbred family of Wirehaired Pointing Griffon dogs in which three offspring were affected by juvenile cataract. The pedigree suggested monogenic autosomal recessive inheritance of the trait. Whole-genome sequencing of an affected dog revealed 12 protein-changing variants that were not present in 566 control genomes, of which two were located in functional candidate genes, FYCO1 and CRYGB. Targeted genotyping of both variants in the investigated family excluded CRYGB and revealed perfect co-segregation of the FYCO1 variant with the juvenile cataract phenotype. This variant, FYCO1:c.2024delG, represents a 1 bp frameshift deletion predicted to truncate ~50% of the open reading frame, p.(Ser675Thrfs*5). FYCO1 encodes the FYVE and coiled-coil domain autophagy adaptor 1, a known regulator of lens autophagy, which is required for normal homeostasis in the eye. In humans, at least 37 pathogenic variants in FYCO1 have been shown to cause autosomal recessive cataract. Fyco1−/− knockout mice also develop cataracts. Together with the current knowledge on FYCO1 variants and their functional impact in humans and mice, our data strongly suggest FYCO1:c.2024delG as a candidate causative variant for the observed juvenile cataract in Wirehaired Pointing Griffon dogs. To the best of our knowledge, this study represents the first report of a FYCO1-related cataract in domestic animals.

    Image-Less THA Cup Navigation in Clinical Routine Setup: Individual Adjustments, Accuracy, Precision, and Robustness.

    Background and Objectives: Even after the 'death' of Lewinnek's safe zone, the orientation of the prosthetic cup in total hip arthroplasty is crucial for success. Accurate cup placement can be achieved with surgical navigation systems. The literature lacks study cohorts with large numbers of hips because postoperative computed tomography is required for the reproducible evaluation of the acetabular component position. To overcome this limitation, we used a validated software program, HipMatch, to accurately assess the cup orientation based on an anterior-posterior pelvic X-ray. The aims of this study were to (1) determine the intraoperative 'individual adjustment' of the cup positioning compared to the widely suggested target values of 40° of inclination and 15° of anteversion, and evaluate the (2) 'accuracy', (3) 'precision', and (4) robustness, regarding systematic errors, of an image-free navigation system in routine clinical use. Material and Methods: We performed a retrospective accuracy study in a single-surgeon case series of 367 navigated primary total hip arthroplasties (PiGalileo™, Smith+Nephew) through an anterolateral approach performed between January 2011 and August 2018. The individual adjustments were defined as the differences between the target cup orientation (40° of inclination, 15° of anteversion) and the intraoperative registration with the navigation software. The accuracy was the difference between the intraoperatively captured cup orientation and the actual postoperative cup orientation determined by HipMatch. The precision was analyzed by the standard deviation of the difference between the intraoperatively registered and the actual cup orientation. The outliers were detected using the Tukey method.
Results: Compared to the target values (40° inclination, 15° anteversion), the individual adjustments showed that the cups were impacted at higher inclination (mean 3.2° ± 1.6°, range (-2)-18°) and higher anteversion (mean 5.0° ± 7.0°, range (-15)-23°) (p < 0.001). The accuracy of the navigated cup placement was -1.7° ± 3.0° ((-15)-11°) for inclination and -4.9° ± 6.2° ((-28)-18°) for anteversion (p < 0.001). Precision of the system was higher for inclination (standard deviation SD 3.0°) compared to anteversion (SD 6.2°) (p < 0.001). We found no difference in the prevalence of outliers for inclination (1.9% (7 out of 367)) compared to anteversion (1.63% (6 out of 367), p = 0.78). The Bland-Altman analysis showed that the differences between the intraoperatively captured final position and the postoperatively determined actual position were spread evenly and randomly for inclination and anteversion. Conclusion: The evaluation of an image-less navigation system in this large study cohort provides accurate and reliable intraoperative feedback. The accuracy and the precision were inferior compared to CT-based navigation systems, particularly regarding the anteversion. However, the assessed values are certainly within a clinically acceptable range. This use of image-less navigation offers an additional tool to address challenging hip prostheses in the context of the hip-spine relationship, to achieve adequate placement of the acetabular components with a minimum of outliers.
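The accuracy, precision, and outlier definitions used in this study (mean signed difference, its standard deviation, and the Tukey rule) can be sketched in a few lines of Python; the angle values in the usage note are invented illustrations, not data from the study.

```python
import numpy as np

def orientation_stats(registered, actual):
    """Accuracy = mean signed difference between intraoperatively registered
    and actual postoperative angles; precision = standard deviation of that
    difference; outliers flagged by the Tukey (1.5 * IQR) rule."""
    diff = np.asarray(registered, dtype=float) - np.asarray(actual, dtype=float)
    accuracy = diff.mean()
    precision = diff.std(ddof=1)  # sample standard deviation
    q1, q3 = np.percentile(diff, [25, 75])
    iqr = q3 - q1
    outliers = (diff < q1 - 1.5 * iqr) | (diff > q3 + 1.5 * iqr)
    return accuracy, precision, outliers
```

For example, with hypothetical registered inclinations `[40, 41, 39, 40, 60]` against an actual `40°` for every hip, the 20° discrepancy is flagged as the single Tukey outlier.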